Specification and Analysis of Resource Utilization Policies for Human-Intensive Systems
Contemporary systems often require the effective support of many types of resources, each governed by complex utilization policies. Sound management of these resources plays a key role in assuring that these systems achieve their goals. To help system developers make sound resource management decisions, I provide a resource utilization policy specification and analysis framework for (1) specifying very diverse kinds of resources and their potentially complex resource utilization policies, (2) dynamically evaluating the policies’ effects on the outcomes achieved by systems utilizing the resources, and (3) formally verifying various kinds of properties of these systems.
Resource utilization policies range from simple, e.g., first-in-first-out, to extremely complex, responding to changes in system environment, state, and stimuli. Further, policies may at times conflict with each other, requiring conflict resolution strategies that add extra complexity. Prior specification approaches rely on relatively simple resource models that prevent the specification of complex utilization and conflict resolution policies. My approach (1) separates resource utilization policy concerns from resource characteristic and request specifications, (2) creates an expressive specification notation for constraint policies, and (3) creates a resource constraint conflict resolution capability. My approach enables creating specifications of policies that are sufficiently precise and detailed to support static and dynamic analyses of how these policies affect the properties of systems constrained or governed by these policies.
I provide a process- and resource-aware discrete-event simulator for simulating system executions that adhere to resource utilization policies. The simulator integrates the existing JSim simulation engine with a separate resource management system. Housing resource management in a separate architectural component makes it easy to keep track of resource utilization traces during a simulation run. My simulation framework provides considerable flexibility in evaluating diverse resource management decisions and supports powerful dynamic analyses.
Dynamic verification through simulation is inherently limited because exhaustive simulation of all scenarios is impossible. I complement this approach with static verification. Prior static resource analysis has supported verification of only relatively simple resource utilization policies. My research utilizes powerful model checking techniques, building on the existing FLAVERS model checking tool, to verify properties of complex systems while also verifying that these systems conform to complex resource utilization policies. My research demonstrates how to use systems such as FLAVERS to verify adherence to complex resource utilization policies as well as overall system properties, such as the absence of resource leaks and resource deadlocks.
I evaluated my approach working with a hospital emergency department domain expert, using detailed, expert-developed models of the processes and resource utilization policies of an emergency department. In doing this, my research demonstrates how my framework can be effective in guiding the domain expert towards making sound decisions about policies for the management of hospital resources, while also providing rigorously-based assurances that the guidance is reliable and well-founded.
My research makes the following contributions: (1) a specification language for resources and resource utilization policies for human-intensive systems, (2) a process- and resource-aware discrete-event simulation engine that creates simulations that adhere to the resource utilization policies, allowing for the dynamic evaluation of resource utilization policies, (3) a process- and resource-aware model checking technique that formally verifies system properties and adherence to resource utilization policies, and (4) validated and verified specifications of an emergency department healthcare system, demonstrating the utility of my approach.
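The separation the framework advocates, keeping resource characteristics and requests apart from the utilization policy while recording a utilization trace for later analysis, can be illustrated with a small sketch. This is not the dissertation's actual notation or tooling; the class, the policy encoding, and the emergency-department task names are all invented for illustration:

```python
import heapq
import itertools

class ResourceManager:
    """A separate resource-management component (hypothetical sketch):
    the dispatch policy is isolated from request/release bookkeeping,
    and every grant is recorded in a utilization trace."""

    def __init__(self, capacity, policy="fifo"):
        self.capacity = capacity
        self.policy = policy          # "fifo" or "priority"
        self.in_use = 0
        self.waiting = []             # heap of (key, task)
        self.trace = []               # (time, task) utilization trace
        self._seq = itertools.count() # arrival order for FIFO tie-breaking

    def request(self, now, task, priority=0):
        # FIFO orders strictly by arrival; "priority" lets the policy
        # override arrival order (lower value = more urgent).
        seq = next(self._seq)
        key = (priority, seq) if self.policy == "priority" else (seq,)
        heapq.heappush(self.waiting, (key, task))
        return self._dispatch(now)

    def release(self, now):
        self.in_use -= 1
        return self._dispatch(now)

    def _dispatch(self, now):
        granted = []
        while self.waiting and self.in_use < self.capacity:
            _, task = heapq.heappop(self.waiting)
            self.in_use += 1
            self.trace.append((now, task))
            granted.append(task)
        return granted

# One nurse, three requests arriving before any release:
rm = ResourceManager(capacity=1, policy="priority")
rm.request(0, "triage", priority=2)
rm.request(1, "discharge", priority=3)
rm.request(2, "resuscitate", priority=1)  # most urgent
rm.release(3)   # grants "resuscitate" ahead of "discharge"
rm.release(4)
print([task for _, task in rm.trace])  # ['triage', 'resuscitate', 'discharge']
```

Swapping the policy string changes only the dispatch order, not the request or release logic, which is the kind of separation of concerns between resource specifications and utilization policies that the framework aims for.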
Specification and Analysis of Resource Utilization Policies for Human-Intensive Systems (Extended Abstract)
Societal processes, such as those used in healthcare, typically depend on the effective utilization of resources, both human and non-human. Sound policies for the management of these resources are crucial in assuring that these processes achieve their goals. But complex utilization policies may govern the use of such resources, increasing the difficulty of accurately incorporating resource considerations into complex processes. This dissertation presents an approach to the specification, allocation, and analysis of the management of such resources.
Learning Failure-Inducing Models for Testing Software-Defined Networks
Software-defined networks (SDN) enable flexible and effective communication systems, e.g., data centers, that are managed by centralized software controllers. However, such a controller can undermine the underlying communication network of an SDN-based system and thus must be carefully tested. When an SDN-based system fails, in order to address such a failure, engineers need to precisely understand the conditions under which it occurs. In this paper, we introduce a machine learning-guided fuzzing method, named FuzzSDN, aiming at both (1) generating effective test data leading to failures in SDN-based systems and (2) learning accurate failure-inducing models that characterize conditions under which such a system fails. This is done in a synergistic manner: the models guide test generation, and test generation in turn aims at improving the models. To our knowledge, FuzzSDN is the first attempt to simultaneously address these two objectives for SDNs. We evaluate FuzzSDN by applying it to systems controlled by two open-source SDN controllers. Further, we compare FuzzSDN with two state-of-the-art methods for fuzzing SDNs and two baselines (i.e., simple extensions of these two existing methods) for learning failure-inducing models. Our results show that (1) compared to the state-of-the-art methods, FuzzSDN generates at least 12 times more failures, within the same time budget, with a controller that is fairly robust to fuzzing, and (2) our failure-inducing models have, on average, a precision of 98% and a recall of 86%, significantly outperforming the baselines.
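The synergistic fuzz-and-learn loop can be caricatured in a few lines of Python. Everything here is hypothetical: the stand-in "controller" fails on an invented condition over two packet fields, and a per-field bounding box plays the role of FuzzSDN's learned failure-inducing model (the actual method learns far richer models):

```python
import random

def system_under_test(pkt):
    # Hypothetical stand-in for an SDN controller: it "fails" when a
    # large length field is combined with a small TTL.
    return pkt["length"] > 900 and pkt["ttl"] < 32

def fuzz_and_learn(rounds=2000, seed=7):
    rng = random.Random(seed)
    failures = []
    # Per-field [lo, hi] bounds: a crude failure-inducing "model".
    lo = {"length": 0, "ttl": 0}
    hi = {"length": 1500, "ttl": 255}
    for _ in range(rounds):
        if failures and rng.random() < 0.5:
            # Model-guided generation: bias half the inputs toward the
            # currently learned failure region.
            pkt = {k: rng.randint(lo[k], hi[k]) for k in lo}
        else:
            # Plain random fuzzing.
            pkt = {"length": rng.randint(0, 1500), "ttl": rng.randint(0, 255)}
        if system_under_test(pkt):
            failures.append(pkt)
            # Model refinement: tighten per-field bounds around failures.
            for k in lo:
                lo[k] = min(p[k] for p in failures)
                hi[k] = max(p[k] for p in failures)
    return failures, lo, hi

failures, lo, hi = fuzz_and_learn()
print(f"{len(failures)} failures; learned region: {lo} .. {hi}")
```

The two objectives reinforce each other exactly as in the abstract: each newly found failure sharpens the model, and the sharpened model steers later test generation toward further failures.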
Stress Testing Control Loops in Cyber-Physical Systems
Cyber-Physical Systems (CPSs) are often safety-critical and deployed in uncertain environments. Identifying scenarios where CPSs do not comply with requirements is fundamental but difficult due to the multidisciplinary nature of CPSs. We investigate the testing of control-based CPSs, where control and software engineers develop the software collaboratively. Control engineers make design assumptions during system development to leverage control theory and obtain guarantees on CPS behaviour. In the implemented system, however, such assumptions are not always satisfied, and their falsification can lead to a loss of those guarantees. We define stress testing of control-based CPSs as generating tests to falsify such design assumptions. We highlight different types of assumptions, focusing on the use of linearised physics models. To generate stress tests falsifying such assumptions, we leverage control theory to qualitatively characterise the input space of a control-based CPS. We propose a novel test parametrisation for control-based CPSs and use it with the input space characterisation to develop a stress testing approach. We evaluate our approach on three case study systems, including a drone, a continuous-current motor (in five configurations), and an aircraft. Our results show the effectiveness of the proposed testing approach in falsifying the design assumptions and highlighting the causes of assumption violations.
Comment: Accepted for publication in August 2023 in ACM Transactions on Software Engineering and Methodology (TOSEM).
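A minimal sketch of the core idea, falsifying a linearisation assumption by searching the input space, might look as follows. The pendulum model, tolerance, and amplitude sweep are invented for illustration and are far simpler than the paper's control-theoretic characterisation:

```python
import math

def simulate(theta0, nonlinear, steps=2000, dt=0.001, g=9.81, L=1.0):
    # Undamped pendulum: theta'' = -(g/L)*sin(theta). The linearised
    # design model replaces sin(theta) with theta, which is only valid
    # for small angles. Semi-implicit Euler integration.
    theta, omega = theta0, 0.0
    for _ in range(steps):
        acc = -(g / L) * (math.sin(theta) if nonlinear else theta)
        omega += acc * dt
        theta += omega * dt
    return theta

def smallest_violating_amplitude(tolerance=0.05):
    # Stress test: sweep initial angles (the "input space") for the
    # smallest amplitude at which the linearisation assumption is
    # falsified, i.e., the two models diverge beyond the tolerance.
    for deg in range(1, 180):
        theta0 = math.radians(deg)
        err = abs(simulate(theta0, True) - simulate(theta0, False))
        if err > tolerance:
            return deg
    return None

print(smallest_violating_amplitude())
```

Small initial angles keep the linearised and nonlinear models in close agreement; the sweep reports the first amplitude at which the design assumption no longer holds, analogous to a test falsifying a linearised-physics assumption.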
Using Machine Learning to Assist with the Selection of Security Controls During Security Assessment
In many domains such as healthcare and banking, IT systems need to fulfill various requirements related to security. The elaboration of security requirements for a given system is in part guided by the controls envisaged by the applicable security standards and best practices. An important difficulty that analysts have to contend with during security requirements elaboration is sifting through a large number of security controls and determining which ones have a bearing on the security requirements for a given system. This challenge is often exacerbated by the scarce security expertise available in most organizations. [Objective] In this article, we develop automated decision support for the identification of security controls that are relevant to a specific system in a particular context. [Method and Results] Our approach, which is based on machine learning, leverages historical data from security assessments performed over past systems in order to recommend security controls for a new system. We operationalize and empirically evaluate our approach using real historical data from the banking domain. Our results show that, when one excludes security controls that are rare in the historical data, our approach has an average recall of ≈ 94% and average precision of ≈ 63%. We further examine through a survey the perceptions of security analysts about the usefulness of the classification models derived from historical data. [Conclusions] The high recall – indicating only a few relevant security controls are missed – combined with the reasonable level of precision – indicating that the effort required to confirm recommendations is not excessive – suggests that our approach is a useful aid to analysts for more efficiently identifying the relevant security controls, and also for decreasing the likelihood that important controls would be overlooked. Further, our survey results suggest that the generated classification models help provide a documented and explicit rationale for choosing the applicable security controls.
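As a toy illustration of recommending controls from historical assessments: the paper trains real classifiers, whereas this sketch just thresholds how often each control was applied among past systems sharing a context tag. The ISO 27001-style control IDs and the data are invented:

```python
def recommend(history, context, threshold=0.5):
    # history: list of (context_tag, set_of_applied_controls) from past
    # security assessments. Recommend every control applied in at least
    # `threshold` of the past systems matching the new system's context.
    similar = [ctrls for tag, ctrls in history if tag == context]
    if not similar:
        return set()
    counts = {}
    for ctrls in similar:
        for c in ctrls:
            counts[c] = counts.get(c, 0) + 1
    return {c for c, n in counts.items() if n / len(similar) >= threshold}

def recall_precision(recommended, actual):
    # The two metrics reported in the abstract.
    tp = len(recommended & actual)
    return tp / len(actual), tp / len(recommended)

history = [
    ("banking", {"A.9.2", "A.12.4", "A.18.1"}),
    ("banking", {"A.9.2", "A.12.4"}),
    ("banking", {"A.9.2", "A.6.1"}),
    ("health",  {"A.7.1"}),
]
rec = recommend(history, "banking", threshold=0.6)
print(sorted(rec))  # controls recommended for a new banking system
```

On this tiny dataset, comparing the recommendation against an analyst's actual selection of, say, {"A.9.2", "A.12.4", "A.6.1"} yields a recall of 2/3 and a precision of 1.0, mirroring how the abstract's recall/precision figures are computed.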
Schedulability Analysis of Real-Time Systems with Uncertain Worst-Case Execution Times
Schedulability analysis is about determining whether a given set of real-time software tasks is schedulable, i.e., whether task executions always complete before their specified deadlines. It is an important activity at both early design and late development stages of real-time systems. Schedulability analysis requires as input the estimated worst-case execution times (WCET) of the software tasks. However, in practice, engineers often cannot provide precise point WCET estimates and prefer to provide plausible WCET ranges. Given a set of real-time tasks with such ranges, we provide an automated technique to determine for what WCET values the system is likely to meet its deadlines, and hence operate safely. Our approach combines a search algorithm for generating worst-case scheduling scenarios with polynomial logistic regression for inferring safe WCET ranges. We evaluated our approach by applying it to a satellite on-board system. Our approach efficiently and accurately estimates safe WCET ranges within which deadlines are likely to be satisfied with high confidence.
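The underlying schedulability question can be made concrete with standard response-time analysis for fixed-priority scheduling, plus naive sampling over the WCET ranges. This is a simplified stand-in: the task parameters are invented, deadlines are assumed equal to periods, priorities are rate-monotonic, and plain Monte Carlo sampling replaces the paper's combination of search and logistic regression:

```python
import math
import random

def schedulable(tasks):
    # tasks: list of (wcet, period) pairs; rate-monotonic priorities
    # (shorter period = higher priority), deadlines equal to periods.
    # Classic response-time analysis: iterate
    #   R_i = C_i + sum over higher-priority j of ceil(R_i / T_j) * C_j
    # to a fixed point, failing if R_i ever exceeds the deadline.
    tasks = sorted(tasks, key=lambda t: t[1])
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            r_next = c_i + sum(math.ceil(r / t_j) * c_j
                               for c_j, t_j in tasks[:i])
            if r_next == r:
                break
            r = r_next
            if r > t_i:
                return False
    return True

def safe_wcet_fraction(ranges, periods, samples=1000, seed=1):
    # Sample point WCETs uniformly from the engineers' ranges and report
    # the fraction of schedulable samples (the paper instead fits a
    # polynomial logistic regression over such labelled samples).
    rng = random.Random(seed)
    ok = 0
    for _ in range(samples):
        tasks = [(rng.uniform(lo, hi), p)
                 for (lo, hi), p in zip(ranges, periods)]
        ok += schedulable(tasks)
    return ok / samples

print(schedulable([(1, 4), (1, 5), (2, 10)]))        # True
print(safe_wcet_fraction([(1, 3), (1, 3)], [4, 5]))  # fraction in (0, 1)
```

The labelled samples trace out the boundary between safe and unsafe WCET regions; fitting a classifier to that boundary is what turns this brute-force picture into the inferred safe WCET ranges the abstract describes.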
Optimal Priority Assignment for Real-Time Systems: A Coevolution-Based Approach
In real-time systems, priorities assigned to real-time tasks determine the order of task executions, by relying on an underlying task scheduling policy. Assigning optimal priority values to tasks is critical to allow the tasks to complete their executions while maximizing safety margins from their specified deadlines. This enables real-time systems to tolerate unexpected overheads in task executions and still meet their deadlines. In practice, priority assignments result from an interactive process between the development and testing teams. In this article, we propose an automated method that aims to identify the best possible priority assignments in real-time systems, accounting for multiple objectives regarding safety margins and engineering constraints. Our approach is based on a multi-objective, competitive coevolutionary algorithm mimicking the interactive priority assignment process between the development and testing teams. We evaluate our approach by applying it to six industrial systems from different domains and several synthetic systems. The results indicate that our approach significantly outperforms both our baselines, i.e., random search and sequential search, and solutions defined by practitioners. Our approach scales to complex industrial systems as an offline analysis method that attempts to find near-optimal solutions within acceptable time, i.e., less than 16 hours.
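For very small task sets the priority-assignment problem can be solved by brute force, which makes the objective, maximizing the minimum safety margin from deadlines, easy to see. The discrete-time simulator and the task set below are illustrative; the paper's coevolutionary search exists precisely because industrial systems are far too large for exhaustive enumeration:

```python
import itertools
import math

def min_margin(tasks, prio):
    """Discrete-time preemptive fixed-priority simulation over one
    hyperperiod. tasks: (wcet, period) in integer ticks; prio lists task
    indices from highest to lowest priority. Returns the smallest
    (deadline - finish) safety margin, or None on a deadline miss."""
    hyper = 1
    for _, p in tasks:
        hyper = hyper * p // math.gcd(hyper, p)
    remaining = [0] * len(tasks)
    deadline = [0] * len(tasks)
    margin = math.inf
    for t in range(hyper):
        for i, (c, p) in enumerate(tasks):
            if t % p == 0:
                if remaining[i] > 0:      # previous job still running
                    return None
                remaining[i] = c
                deadline[i] = t + p       # implicit deadline = period
        for i in prio:                    # run highest-priority ready task
            if remaining[i] > 0:
                remaining[i] -= 1
                if remaining[i] == 0:
                    margin = min(margin, deadline[i] - (t + 1))
                break
    return None if any(remaining) else margin

def best_assignment(tasks):
    # Exhaustive search over priority orders, feasible only for tiny
    # task sets; it maximizes the minimum safety margin.
    best, best_margin = None, -math.inf
    for prio in itertools.permutations(range(len(tasks))):
        m = min_margin(tasks, prio)
        if m is not None and m > best_margin:
            best, best_margin = prio, m
    return best, best_margin

tasks = [(2, 10), (3, 9), (1, 6)]  # illustrative (WCET, period) pairs
print(best_assignment(tasks))      # ((2, 1, 0), 4)
```

Notably, the best assignment here is the rate-monotonic order, but with the minimum margin made explicit; the coevolutionary approach searches the same objective space while also honouring engineering constraints that brute force ignores.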
Discrete-Event Simulation and Integer Linear Programming for Constraint-Aware Resource Scheduling
This paper presents a method for scheduling resources in complex systems that integrate humans with diverse hardware and software components, and for studying the impact of resource schedules on system characteristics. The method uses discrete-event simulation and integer linear programming, and relies on detailed models of the system’s processes, specifications of the capabilities of the system’s resources, and constraints on the operations of the system and its resources. As a case study, we examine processes involved in the operation of a hospital emergency department, studying the impact staffing policies have on such key quality measures as patient length of stay (LoS), number of handoffs, staff utilization levels, and cost. Our results suggest that physician and nurse utilization levels for clinical tasks of 70% result in a good balance between LoS and cost. Allowing shift lengths to vary and shifts to overlap increases scheduling flexibility. Clinical experts provided face validation of our results. Our approach improves on the state of the art by enabling the effective use of detailed resource and constraint specifications to support analysis and decision making about complex processes in domains that currently rely largely on trial and error and other ad hoc methods.
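A drastically simplified stand-in for the optimization component might search staffing mixes for the cheapest one that keeps utilization at or below the 70% level the study found favourable. The 60/40 physician/nurse workload split, cost figure, and workload model below are all invented for illustration and are not the paper's actual ILP formulation:

```python
import itertools

def plan_staffing(arrival_rate, service_time, max_staff=10,
                  target_util=0.70, cost_per_staff=100):
    # Exhaustive 0/1-style search standing in for an ILP solver: pick
    # the cheapest physician/nurse mix keeping both utilization levels
    # at or below the target. All workload figures are illustrative.
    offered_load = arrival_rate * service_time  # staff-hours needed per hour
    best = None
    for phys, nurse in itertools.product(range(1, max_staff + 1), repeat=2):
        # Assume each patient needs 60% physician time, 40% nurse time.
        phys_util = 0.6 * offered_load / phys
        nurse_util = 0.4 * offered_load / nurse
        if phys_util <= target_util and nurse_util <= target_util:
            cost = cost_per_staff * (phys + nurse)
            if best is None or cost < best[0]:
                best = (cost, phys, nurse, phys_util, nurse_util)
    return best

# Six arrivals per hour, one staff-hour of care each:
print(plan_staffing(arrival_rate=6, service_time=1.0))
```

In the paper, a discrete-event simulation of the emergency department's processes evaluates candidate schedules against LoS, handoffs, and utilization, with the ILP handling constraint-aware schedule construction; this sketch only conveys the cost-versus-utilization trade-off being optimized.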
Decision Support for Security-Control Identification Using Machine Learning
[Context & Motivation] In many domains such as healthcare and banking, IT systems need to fulfill various requirements related to security. The elaboration of security requirements for a given system is in part guided by the controls envisaged by the applicable security standards and best practices. [Problem] An important difficulty that analysts have to contend with during security requirements elaboration is sifting through a large number of security controls and determining which ones have a bearing on the security requirements for a given system. This challenge is often exacerbated by the scarce security expertise available in most organizations. [Principal ideas/results] In this paper, we develop automated decision support for the identification of security controls that are relevant to a specific system in a particular context. Our approach, which is based on machine learning, leverages historical data from security assessments performed over past systems in order to recommend security controls for a new system. We operationalize and empirically evaluate our approach using real historical data from the banking domain. Our results show that, when one excludes security controls that are rare in the historical data, our approach has an average recall of ≈ 95% and average precision of ≈ 67%. [Contribution] The high recall – indicating only a few relevant security controls are missed – combined with the reasonable level of precision – indicating that the effort required to confirm recommendations is not excessive – suggests that our approach is a useful aid to analysts for more efficiently identifying the relevant security controls, and also for decreasing the likelihood that important controls would be overlooked.